332 results for Visual cortex

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Our aim was to make a quantitative comparison of the response of the different visual cortical areas to selective stimulation of the two different cone-opponent pathways [long- and medium-wavelength (L/M)- and short-wavelength (S)-cone-opponent] and the achromatic pathway under equivalent conditions. The appropriate stimulus-contrast metric for the comparison of colour and achromatic sensitivity is unknown, however, and so a secondary aim was to investigate whether equivalent fMRI responses of each cortical area are predicted by stimulus contrast matched in multiples of detection threshold that approximately equates for visibility, or direct (cone) contrast matches in which psychophysical sensitivity is uncorrected. We found that the fMRI response across the two colour and achromatic pathways is not well predicted by threshold-scaled stimuli (perceptual visibility) but is better predicted by cone contrast, particularly for area V1. Our results show that the early visual areas (V1, V2, V3, VP and hV4) all have robust responses to colour. No area showed an overall colour preference, however, until anterior to V4 where we found a ventral occipital region that has a significant preference for chromatic stimuli, indicating a functional distinction from earlier areas. We found that all of these areas have a surprisingly strong response to S-cone stimuli, at least as great as the L/M response, suggesting a relative enhancement of the S-cone cortical signal. We also identified two areas (V3A and hMT+) with a significant preference for achromatic over chromatic stimuli, indicating a functional grouping into a dorsal pathway with a strong magnocellular input.
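The cone-contrast metric referred to above can be sketched numerically. The following minimal example uses hypothetical cone excitation values (not data from the study) to compute Weber cone contrasts and a pooled RMS summary:

```python
import numpy as np

def cone_contrast(stim, background):
    """Weber cone contrast for each cone class (L, M, S):
    excitation change relative to the adapting background."""
    stim = np.asarray(stim, dtype=float)
    background = np.asarray(background, dtype=float)
    return (stim - background) / background

def rms_cone_contrast(stim, background):
    """Root-mean-square contrast pooled across the three cone classes,
    one common scalar for comparing chromatic and achromatic stimuli."""
    c = cone_contrast(stim, background)
    return np.sqrt(np.mean(c ** 2))

# Hypothetical cone excitations for a stimulus and its background.
background = np.array([10.0, 8.0, 2.0])   # L, M, S excitations
stimulus   = np.array([11.0, 7.2, 2.0])   # an L/M-opponent modulation

print(cone_contrast(stimulus, background))      # per-cone Weber contrasts
print(rms_cone_contrast(stimulus, background))  # pooled RMS contrast
```

Threshold scaling, by contrast, would divide each of these values by the psychophysical detection threshold for that pathway before comparison.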

Relevance: 70.00%

Abstract:

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context for interpreting sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; it is simpler to learn position as a function of the visual input. Usually when learning images, an intermediate representation is employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness of two different complex cell models for localisation is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
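The invariance property of complex cells mentioned above is conventionally captured by the Gabor energy model, which combines the squared outputs of a quadrature filter pair. A minimal 1-D sketch (filter parameters are illustrative, not those of the evaluated models):

```python
import numpy as np

def gabor_pair(size, freq, sigma):
    """Quadrature (even/odd) 1-D Gabor pair: cosine and sine carriers
    under the same Gaussian envelope."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def complex_cell_response(signal, freq=0.1, sigma=8.0):
    """Energy model of a complex cell: root of the summed squared
    responses of the quadrature pair. The result is (approximately)
    invariant to the phase of the stimulus."""
    even, odd = gabor_pair(len(signal), freq, sigma)
    return np.sqrt((signal @ even) ** 2 + (signal @ odd) ** 2)

x = np.arange(64) - 32
grating_a = np.cos(2 * np.pi * 0.1 * x)        # grating at phase 0
grating_b = np.cos(2 * np.pi * 0.1 * x + 1.3)  # same grating, phase-shifted
# The two responses are nearly equal despite the phase shift:
print(complex_cell_response(grating_a), complex_cell_response(grating_b))
```

It is exactly this phase (position) invariance that makes the representation attractive for localisation: small image shifts leave the response largely unchanged.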

Relevance: 70.00%

Abstract:

Gabor representations have been widely used in facial analysis (face recognition, face detection and facial expression detection) due to their biological relevance and computational properties. Two popular Gabor representations used in the literature are: 1) Log-Gabor and 2) Gabor energy filters. Even though these representations are somewhat similar, they also have distinct differences: the Log-Gabor filters mimic the simple cells in the visual cortex, while the Gabor energy filters emulate the complex cells, which causes subtle differences in the responses. In this paper, we analyze the difference between these two Gabor representations and quantify these differences on the task of facial action unit (AU) detection. In our experiments conducted on the Cohn-Kanade dataset, we report an average area underneath the ROC curve (A′) of 92.60% across 17 AUs for the Gabor energy filters, while the Log-Gabor representation achieved an average A′ of 96.11%. This result suggests that the small spatial differences that the Log-Gabor filters pick up on are more useful for AU detection than the differences in contours and edges that the Gabor energy filters extract.
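As a rough sketch of the Log-Gabor representation, here is its standard frequency-domain definition: a Gaussian on a log-frequency axis with exactly zero DC response. The centre frequency and bandwidth ratio below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def log_gabor(freqs, f0, sigma_ratio=0.55):
    """1-D Log-Gabor transfer function (defined in the frequency domain):
    a Gaussian on a log-frequency axis. Unlike an ordinary Gabor, it has
    exactly zero response at f = 0 (no DC component)."""
    g = np.zeros_like(freqs, dtype=float)
    pos = freqs > 0                       # log(f) undefined at f = 0
    g[pos] = np.exp(-(np.log(freqs[pos] / f0) ** 2)
                    / (2 * np.log(sigma_ratio) ** 2))
    return g

freqs = np.fft.rfftfreq(128)              # 0 .. 0.5 cycles/sample
g = log_gabor(freqs, f0=0.1)
print(g[0])                               # zero response to mean luminance
print(freqs[np.argmax(g)])                # peak near the centre frequency f0
```

A spatial-domain filter is then obtained by inverse FFT; Gabor energy filters, in contrast, combine quadrature pairs of ordinary Gabors in the spatial domain.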

Relevance: 70.00%

Abstract:

This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.

Relevance: 60.00%

Abstract:

Vernier acuity, a form of visual hyperacuity, is amongst the most precise forms of spatial vision. Under optimal conditions Vernier thresholds are much finer than the inter-photoreceptor distance. Achievement of such high precision is based substantially on cortical computations, most likely in the primary visual cortex. Using stimuli with added positional noise, we show that Vernier processing is reduced with advancing age across a wide range of noise levels. Using an ideal observer model, we are able to characterize the mechanisms underlying age-related loss, and show that the reduction in Vernier acuity can be mainly attributed to the reduction in efficiency of sampling, with no significant change in the level of internal position noise, or spatial distortion, in the visual system.
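The logic of separating sampling efficiency from internal noise can be sketched with the common equivalent-noise (linear amplifier) formulation; the model form is standard and all parameter values below are hypothetical, not the study's estimates:

```python
import numpy as np

def vernier_threshold(sigma_ext, sigma_int, efficiency):
    """Equivalent-noise (linear amplifier) model: the squared threshold
    grows with external position noise plus an internal noise floor,
    scaled by the observer's sampling efficiency."""
    return np.sqrt((sigma_ext ** 2 + sigma_int ** 2) / efficiency)

noise = np.linspace(0, 10, 6)   # external position noise levels (arcmin)
young = vernier_threshold(noise, sigma_int=1.0, efficiency=0.5)
older = vernier_threshold(noise, sigma_int=1.0, efficiency=0.25)

# Halving efficiency raises thresholds by the same factor (sqrt(2)) at
# EVERY noise level, whereas raised internal noise would elevate
# thresholds only where external noise is low:
print(older / young)
```

This is how a uniform threshold elevation across noise levels, as reported above, points to reduced sampling efficiency rather than increased internal position noise.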

Relevance: 60.00%

Abstract:

Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities concerns why these architectures perform so well. In this paper, we offer a unique perspective on this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
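The core observation that linear filtering followed by a linear SVM collapses into a single linear function of the image can be demonstrated directly. The filter bank and weights below are random placeholders, not the paper's features:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))

# A bank of hypothetical linear "V1-inspired" filters applied by inner
# product, followed by a linear SVM weight vector w and bias b.
filters = rng.standard_normal((5, 64))   # 5 filters over the 8x8 image
w = rng.standard_normal(5)               # learned SVM weights, one per filter
b = 0.1

features = filters @ image.ravel()
score_via_features = w @ features + b

# Because both stages are linear, the whole pipeline collapses to one
# effective template acting directly on the raw pixels:
effective_template = filters.T @ w
score_via_pixels = effective_template @ image.ravel() + b

print(np.isclose(score_via_features, score_via_pixels))  # True
```

This is the sense in which the feature extraction can be "subsumed" by the SVM: the learning problem can be posed directly over a (weighted) basis expansion of the image rather than over precomputed features.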

Relevance: 60.00%

Abstract:

Myopia (short-sightedness) is a common ocular disorder of children and young adults. Studies primarily using animal models have shown that the retina controls eye growth and the outer retina is likely to have a key role. One theory is that the proportion of L (long-wavelength-sensitive) and M (medium-wavelength-sensitive) cones is related to myopia development, with a high L/M cone ratio predisposing individuals to myopia. However, not all dichromats (persons with red-green colour vision deficiency) with extreme L/M cone ratios have high refractive errors. We predict that the L/M cone ratio will vary in individuals with normal trichromatic colour vision but not show a systematic difference simply due to refractive error. The aim of this study was to determine if L/M cone ratios in the central 30° are different between myopic and emmetropic young, colour normal adults. Information about L/M cone ratios was determined using the multifocal visual evoked potential (mfVEP). The mfVEP can be used to measure the response of visual cortex to different visual stimuli. The visual stimuli were generated and measurements performed using the Visual Evoked Response Imaging System (VERIS 5.1). The mfVEP was measured when the L and M cone systems were separately stimulated using the method of silent substitution. The method of silent substitution alters the output of three primary lights, each with a physically different spectral distribution, to control the excitation of one or more photoreceptor classes without changing the excitation of the unmodulated photoreceptor classes. The stimulus was a dartboard array subtending 30° horizontally and 30° vertically on a calibrated LCD screen. The m-sequence of the stimulus was 2¹⁵−1. The N1-P1 amplitude ratio of the mfVEP was used to estimate the L/M cone ratio. Data were collected for 30 young adults (22 to 33 years of age), consisting of 10 emmetropes (+0.3±0.4 D) and 20 myopes (–3.4±1.7 D).
The stimulus and analysis techniques were confirmed using responses of two dichromats. For the entire participant group, the estimated central L/M cone ratios ranged from 0.56 to 1.80 in the central 3°-13° diameter ring and from 0.94 to 1.91 in the more peripheral 13°-30° diameter ring. Within 3°-13°, the mean L/M cone ratio of the emmetropic group was 1.20±0.33 and the mean was similar, 1.20±0.26, for the myopic group. For the 13°-30° ring, the mean L/M cone ratio of the emmetropic group was 1.48±0.27 and it was slightly lower in the myopic group, 1.30±0.27. Independent-samples t-tests indicated no significant difference between the L/M cone ratios of the emmetropic and myopic group for either the central 3°-13° ring (p=0.986) or the more peripheral 13°-30° ring (p=0.108). The similar distributions of estimated L/M cone ratios in the sample of emmetropes and myopes indicate that there is likely to be no association between the L/M cone ratio and refractive error in humans.
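The method of silent substitution described above reduces to solving a small linear system: given a matrix of cone excitations per display primary, find the primary modulations that change one cone class's excitation while leaving the others unchanged. A sketch with a hypothetical cone-excitation matrix (the real matrix comes from measured primary spectra and cone fundamentals):

```python
import numpy as np

# Hypothetical cone-excitation matrix: entry [i, j] is the excitation of
# cone class i (L, M, S) produced by unit output of display primary j (R, G, B).
A = np.array([[0.60, 0.35, 0.05],
              [0.30, 0.55, 0.10],
              [0.02, 0.08, 0.90]])

# Silent substitution: solve for primary modulations that change L-cone
# excitation by a target amount while "silencing" M and S cones.
target = np.array([0.1, 0.0, 0.0])        # modulate L only
primaries = np.linalg.solve(A, target)    # required R, G, B modulations

print(A @ primaries)   # recovers the target: M and S excitations unchanged
```

An M-cone-isolating stimulus uses the target `[0, 0.1, 0]` instead; the solution only exists within gamut if the required primary modulations stay within the display's range.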

Relevance: 60.00%

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance: 60.00%

Abstract:

Converging evidence from epidemiological, clinical and neuropsychological research suggests a link between cannabis use and increased risk of psychosis. Long-term cannabis use has also been related to deficit-like "negative" symptoms and cognitive impairment that resemble some of the clinical and cognitive features of schizophrenia. The current functional brain imaging study investigated the impact of a history of heavy cannabis use on impaired executive function in first-episode schizophrenia patients. While participants performed the Tower of London task in a magnetic resonance imaging scanner, event-related blood oxygenation level-dependent (BOLD) brain activation was compared between four age- and gender-matched groups: 12 first-episode schizophrenia patients; 17 long-term cannabis users; seven cannabis-using first-episode schizophrenia patients; and 17 healthy control subjects. BOLD activation was assessed as a function of increasing task difficulty within and between groups, as well as the main effects of cannabis use and the diagnosis of schizophrenia. Cannabis users and non-drug-using first-episode schizophrenia patients exhibited equivalently reduced dorsolateral prefrontal activation in response to task difficulty. A trend towards additional prefrontal and left superior parietal cortical activation deficits was observed in cannabis-using first-episode schizophrenia patients, while a history of cannabis use accounted for increased activation in the visual cortex. Cannabis users and schizophrenia patients fail to adequately activate the dorsolateral prefrontal cortex, thus pointing to a common working memory impairment which is particularly evident in cannabis-using first-episode schizophrenia patients. A history of heavy cannabis use, on the other hand, accounted for increased primary visual processing, suggesting compensatory imagery processing of the task.

Relevance: 60.00%

Abstract:

Introduction: Decreased water displacement following increased neural activity has been observed using diffusion-weighted functional MRI (DfMRI) at high b-values. The physiological mechanisms underlying the diffusion signal change may be distinct from the standard blood oxygenation level-dependent (BOLD) contrast and closer to the source of neural activity. Whether DfMRI reflects neural activity more directly than BOLD outside the primary cerebral regions remains unclear. Methods: Colored and achromatic Mondrian visual stimuli were statistically contrasted to functionally localize the human color center, area V4, in neurologically intact adults. Spatial and temporal properties of DfMRI and BOLD activation were examined across regions of the visual cortex. Results: At the individual level, DfMRI activation patterns showed greater spatial specificity to V4 than BOLD. The BOLD activation patterns were more prominent in the primary visual cortex than those of DfMRI, for which activation was localized to the ventral temporal lobe. Temporally, the diffusion signal changes in V4 and V1 both preceded the corresponding hemodynamic response; however, the early diffusion signal change was more evident in V1. Conclusions: DfMRI may be of use in imaging applications implementing cognitive subtraction paradigms, and where highly precise individual functional localization is required.

Relevance: 40.00%

Abstract:

By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual-tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices.

Relevance: 30.00%

Abstract:

Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within semantic category, compared to items blocked between categories. This semantic context effect has been attributed to various mechanisms, including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.

Relevance: 30.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

Relevance: 30.00%

Abstract:

We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired, robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to the HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal directed path planning results of HiLAM in two different environments, an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
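The perceptual-speed idea described above (speed estimated from the apparent rate of visual change, then integrated into dead-reckoning position with accumulating error) can be sketched crudely. The mean-absolute-difference measure and the fixed gain below are simplifying assumptions for illustration, not RatSLAM's actual algorithm:

```python
import numpy as np

def perceptual_speed(frame_prev, frame_curr, gain=1.0):
    """Crude visual-odometry speed estimate: mean absolute intensity
    change between consecutive frames, scaled by an assumed gain that
    would be calibrated against known motion."""
    return gain * np.mean(np.abs(frame_curr - frame_prev))

def dead_reckon(frames, dt=1.0):
    """Integrate frame-to-frame speed estimates into distance travelled.
    Each step adds a small error, so the estimate drifts over time;
    this is the accumulation that loop-closure detection corrects."""
    distance = 0.0
    for prev, curr in zip(frames, frames[1:]):
        distance += perceptual_speed(prev, curr) * dt
    return distance

rng = np.random.default_rng(1)
frames = [rng.random((16, 16)) for _ in range(5)]  # stand-in camera frames
print(dead_reckon(frames))
```

In a full system, a loop-closure event (recognising a previously visited place) would redistribute the accumulated drift along the trajectory, restoring the spatial fidelity that pure integration loses.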